TuckerDNCaching: high-quality negative sampling with Tucker decomposition

Authors

Abstract

Knowledge Graph Embedding (KGE) translates entities and relations of knowledge graphs (KGs) into a low-dimensional vector space, enabling an efficient way of predicting missing facts. Generally, KGE models are trained with positive and negative examples, discriminating positives against negatives. Nevertheless, KGs contain only facts; training therefore requires generating negatives from triples not observed in KGs, a process referred to as negative sampling. Since KGE models are sensitive to their inputs, negative sampling becomes crucial, and the quality of negatives is critical to training. Generative adversarial networks (GAN) and self-adversarial methods have recently been utilized to address the vanishing-gradient problem observed with early negative sampling methods. However, they introduce the problem of false negatives with high probability. In this paper, we extend the idea of reducing false negatives by adopting a Tucker decomposition representation, i.e., TuckerDNCaching, to enhance the semantic soundness of latent relations among entities by introducing a relation feature space. TuckerDNCaching ensures the quality of generated negative samples, and the experimental results reflect that our proposed method outperforms the existing state-of-the-art negative sampling methods.
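The Tucker-style scoring that underlies ranking candidate negatives can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding sizes, random initialization, and the helper names `score` and `rank_negatives` are all assumptions, and the bilinear form `e_h^T (W x_1 r) e_t` is the generic Tucker triple score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): entity/relation counts and
# embedding sizes for entities (de) and relations (dr).
n_ent, n_rel, de, dr = 100, 10, 16, 8

E = rng.normal(size=(n_ent, de))   # entity embeddings
R = rng.normal(size=(n_rel, dr))   # relation embeddings
W = rng.normal(size=(dr, de, de))  # Tucker core tensor

def score(h, r, t):
    """Tucker-style triple score: e_h^T (W x_1 r) e_t."""
    Wr = np.einsum('r,rij->ij', R[r], W)  # contract core with relation vector
    return E[h] @ Wr @ E[t]

def rank_negatives(h, r, candidates):
    """Rank candidate tail replacements by score (higher = harder negative)."""
    s = np.array([score(h, r, t) for t in candidates])
    return [candidates[i] for i in np.argsort(-s)]

cands = list(range(10))
hard = rank_negatives(0, 3, cands)
print(hard[0])  # index of the highest-scoring (hardest) candidate negative
```

A caching-based sampler would keep the top-ranked candidates per (head, relation) pair and refresh them periodically during training.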


Similar articles

Some Theory on Non-negative Tucker Decomposition

Some theoretical difficulties that arise from dimensionality reduction for tensors with non-negative coefficients are discussed in this paper. A necessary and sufficient condition is derived for a low nonnegative rank tensor to admit a non-negative Tucker decomposition with a core of the same non-negative rank. Moreover, we provide evidence that the only algorithm operating mode-wise, minimizing...


Equivariant and scale-free Tucker decomposition models

Analyses of array-valued datasets often involve reduced-rank array approximations, typically obtained via least-squares or truncations of array decompositions. However, least-squares approximations tend to be noisy in high-dimensional settings, and may not be appropriate for arrays that include discrete or ordinal measurements. This article develops methodology to obtain low-rank model-based re...


Fast Blocked Clause Decomposition with High Quality

Any CNF formula can be decomposed into two blocked subsets such that both can be solved by BCE (Blocked Clause Elimination). To make the decomposition more useful, one hopes to have the decomposition as unbalanced as possible. It is often time-consuming to achieve this goal. So far there have been several decomposition and post-processing algorithms such as PureDecompose, QuickDecompose, EagerMover ...


Accelerating the Tucker Decomposition with Compressed Sparse Tensors

The Tucker decomposition is a higher-order analogue of the singular value decomposition and is a popular method of performing analysis on multi-way data (tensors). Computing the Tucker decomposition of a sparse tensor is demanding in terms of both memory and computational resources. The primary kernel of the factorization is a chain of tensor-matrix multiplications (TTMc). State-of-the-art algo...
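The chain of tensor-matrix multiplications (TTMc) mentioned above can be illustrated with a tiny dense example; the tensor, factor matrices, and ranks here are arbitrary stand-ins, not taken from the cited work, and a real implementation would exploit sparsity.

```python
import numpy as np

# Small dense stand-in for the (in practice sparse) input tensor.
X = np.arange(24, dtype=float).reshape(2, 3, 4)

# Factor matrices for modes 1 and 2; ranks chosen arbitrarily.
B = np.random.default_rng(1).normal(size=(3, 2))
C = np.random.default_rng(2).normal(size=(4, 2))

# TTMc for mode 0: contract X with B along mode 1 and with C along mode 2,
# leaving mode 0 intact -- the primary kernel the abstract refers to.
Y = np.einsum('ijk,jb,kc->ibc', X, B, C)
print(Y.shape)  # (2, 2, 2)
```

Computing one such chain per mode, then an SVD of each result's unfolding, is the core loop of the standard higher-order orthogonal iteration for Tucker.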


Alternating proximal gradient method for sparse nonnegative Tucker decomposition

Multi-way data arises in many applications such as electroencephalography classification, face recognition, text mining and hyperspectral data analysis. Tensor decomposition has been commonly used to find the hidden factors and elicit the intrinsic structures of the multi-way data. This paper considers sparse nonnegative Tucker decomposition (NTD), which is to decompose a given tensor into the p...
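A single proximal-gradient update of the kind used for sparse nonnegative factors can be sketched as below. This is a generic textbook step, not the cited paper's algorithm: the step size, sparsity weight, and the helper name `prox_step` are assumptions.

```python
import numpy as np

def prox_step(U, grad, step, lam):
    """One proximal-gradient update for a factor matrix: a gradient step,
    soft-thresholding for an l1 sparsity penalty, then projection onto
    the nonnegative orthant."""
    V = U - step * grad                                        # gradient step
    V = np.sign(V) * np.maximum(np.abs(V) - step * lam, 0.0)   # soft-threshold
    return np.maximum(V, 0.0)                                  # nonnegativity

# Toy factor matrix and a zero gradient, so only the prox operator acts.
U = np.array([[0.5, -0.2], [0.05, 1.0]])
U1 = prox_step(U, np.zeros_like(U), step=0.1, lam=0.3)
print(U1)
```

With threshold step*lam = 0.03, small entries are zeroed and negative entries are clipped, which is how the update enforces both sparsity and nonnegativity at once.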



Journal

Journal title: Journal of Intelligent Information Systems

Year: 2023

ISSN: 1573-7675, 0925-9902

DOI: https://doi.org/10.1007/s10844-023-00796-y